Strong completeness and faithfulness in Bayesian networks

Author

  • Christopher Meek
Abstract

A completeness result for d-separation applied to discrete Bayesian networks is presented and it is shown that in a strong measure-theoretic sense almost all discrete distributions for a given network structure are faithful; i.e. the independence facts true of the distribution are all and only those entailed by the network structure.
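
The "almost all" claim can be illustrated numerically on the smallest nontrivial structure. The sketch below is an illustration only, not code from the paper: it draws random conditional probability tables for the chain X → Y → Z and checks that the independence entailed by d-separation, X ⊥ Z | Y, holds exactly, while the non-entailed independence X ⊥ Z fails, i.e. the random parameterization is faithful; parameterizations for which I(X;Z) vanishes form a measure-zero set.

```python
# Minimal numerical sketch (not from the paper): for the chain X -> Y -> Z,
# a randomly drawn parameterization is Markov (X ⊥ Z | Y holds exactly) and,
# with probability 1, faithful (X ⊥ Z does NOT hold).
import numpy as np

rng = np.random.default_rng(0)

# Random CPTs for binary X, Y|X, Z|Y.
p_x = rng.dirichlet([1, 1])                  # P(X)
p_y_given_x = rng.dirichlet([1, 1], size=2)  # P(Y | X), rows indexed by X
p_z_given_y = rng.dirichlet([1, 1], size=2)  # P(Z | Y), rows indexed by Y

# Joint P(X, Y, Z) factorizes according to the DAG X -> Y -> Z.
joint = np.einsum("x,xy,yz->xyz", p_x, p_y_given_x, p_z_given_y)

def mutual_information(p, axes_keep, axes_cond=()):
    """(Conditional) mutual information I(A;B|C) from a joint table, in nats."""
    all_axes = set(range(p.ndim))
    a, b = axes_keep
    keep = {a, b, *axes_cond}
    p_abc = p.sum(axis=tuple(all_axes - keep), keepdims=True)
    p_ac = p_abc.sum(axis=b, keepdims=True)
    p_bc = p_abc.sum(axis=a, keepdims=True)
    p_c = p_abc.sum(axis=(a, b), keepdims=True)
    mask = p_abc > 0
    return float((p_abc * np.log(p_abc * p_c / (p_ac * p_bc)))[mask].sum())

print("I(X;Z|Y) =", mutual_information(joint, (0, 2), (1,)))  # ~0: entailed by d-separation
print("I(X;Z)   =", mutual_information(joint, (0, 2)))        # > 0: faithful, generically
```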


Similar articles

Learning Identifiable Gaussian Bayesian Networks in Polynomial Time and Sample Complexity

Learning the directed acyclic graph (DAG) structure of a Bayesian network from observational data is a notoriously difficult problem for which many hardness results are known. In this paper we propose a provably polynomial-time algorithm for learning sparse Gaussian Bayesian networks with equal noise variance — a class of Bayesian networks for which the DAG structure can be uniquely identified ...
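
As a rough illustration of why the equal-noise-variance restriction makes the DAG identifiable (a sketch under that assumption only, not the algorithm proposed in the paper, and with made-up variables and coefficients): a topological order can be recovered greedily, because the unordered variable with the smallest conditional variance given the already-ordered variables always has all of its parents among them.

```python
# Hedged sketch: greedy order recovery for a linear Gaussian BN with equal
# noise variances and nonzero edge coefficients. Not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(1)
n, sigma2 = 100_000, 1.0

# Ground-truth DAG over (X0, X1, X2, X3): X0 -> X1 -> X3 and X0 -> X2 -> X3.
B = np.zeros((4, 4))            # B[j, i] = coefficient of Xi in the equation for Xj
B[1, 0], B[2, 0], B[3, 1], B[3, 2] = 0.8, -1.1, 0.9, 0.7

X = np.zeros((n, 4))
noise = rng.normal(scale=np.sqrt(sigma2), size=(n, 4))
for j in range(4):              # generate in topological order 0, 1, 2, 3
    X[:, j] = X @ B[j] + noise[:, j]

order, remaining = [], list(range(4))
while remaining:
    cond_vars = []
    for j in remaining:
        if order:
            # Residual variance of Xj after regressing on the already-ordered variables.
            _, res, _, _ = np.linalg.lstsq(X[:, order], X[:, j], rcond=None)
            cond_vars.append(res[0] / n)
        else:
            cond_vars.append(X[:, j].var())
    nxt = remaining[int(np.argmin(cond_vars))]
    order.append(nxt)
    remaining.remove(nxt)

print("recovered topological order:", order)   # expected: [0, 1, 2, 3] or [0, 2, 1, 3]
```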

d-Separation: Strong Completeness of Semantics in Bayesian Network Inference

We present an algorithm, called Semantics in Inference (SI), that uses d-separation to denote the semantics of every potential constructed during exact inference in discrete Bayesian networks. We establish that SI possesses four salient features, namely, polynomial time complexity, soundness, completeness, and strong completeness. SI provides a better understanding of the theoretical foundation...
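
For context, d-separation itself can be tested with the standard moralization criterion: X and Y are d-separated by Z iff Z separates X from Y in the moral graph of the ancestral subgraph on X ∪ Y ∪ Z. The sketch below implements that test and is illustrative only; it is not the SI algorithm, and the dict-of-children graph encoding is an assumption of the example.

```python
# Self-contained d-separation check via the moralized ancestral graph.
from collections import deque

def ancestors(dag, nodes):
    """All nodes with a directed path into `nodes`, including `nodes` themselves."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        v = stack.pop()
        for u, children in dag.items():
            if v in children and u not in seen:
                seen.add(u)
                stack.append(u)
    return seen

def d_separated(dag, xs, ys, zs):
    """dag: {node: set_of_children}. True iff xs and ys are d-separated by zs."""
    keep = ancestors(dag, set(xs) | set(ys) | set(zs))
    # Moralize the ancestral subgraph: an undirected edge for each arc and
    # between every pair of parents that share a child.
    und = {v: set() for v in keep}
    for u in keep:
        for w in dag.get(u, set()) & keep:
            und[u].add(w); und[w].add(u)
    for child in keep:
        parents = [u for u in keep if child in dag.get(u, set())]
        for i, a in enumerate(parents):
            for b in parents[i + 1:]:
                und[a].add(b); und[b].add(a)
    # Undirected reachability from xs while avoiding zs; reaching ys means d-connected.
    blocked, frontier, seen = set(zs), deque(set(xs) - set(zs)), set(xs)
    while frontier:
        v = frontier.popleft()
        if v in ys:
            return False
        for w in und[v] - seen - blocked:
            seen.add(w)
            frontier.append(w)
    return True

# Collider example A -> C <- B: A and B are d-separated marginally,
# but conditioning on the common child C d-connects them.
dag = {"A": {"C"}, "B": {"C"}, "C": set()}
print(d_separated(dag, {"A"}, {"B"}, set()))   # True
print(d_separated(dag, {"A"}, {"B"}, {"C"}))   # False
```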

Lifted Representation of Relational Causal Models Revisited: Implications for Reasoning and Structure Learning

Maier et al. (2010) introduced the relational causal model (RCM) for representing and inferring causal relationships in relational data. A lifted representation, called abstract ground graph (AGG), plays a central role in reasoning with and learning of RCM. The correctness of the algorithm proposed by Maier et al. (2013a) for learning RCM from data relies on the soundness and completeness of AG...

Monotone DAG Faithfulness: A Bad Assumption

In a recent paper, Cheng, Greiner, Kelly, Bell and Liu (Artificial Intelligence 137:43-90, 2002) describe an algorithm for learning Bayesian networks that—in a domain consisting of n variables—identifies the optimal solution using O(n^4) calls to a mutual-information oracle. This seemingly incredible result relies on (1) the standard assumption that the generative distribution is Markov and faithful ...

How to Tackle an Extremely Hard Learning Problem: Learning Causal Structures from Non-Experimental Data without the Faithfulness Assumption or the Like

Most methods for learning causal structures from non-experimental data rely on some assumptions of simplicity, the most famous of which is known as the Faithfulness condition. Without assuming such conditions to begin with, we develop a learning theory for inferring the structure of a causal Bayesian network, and we use the theory to provide a novel justification of a certain assumption of simp...



Year of publication: 1995